-
Authors: Ralph Knipper (rak0035@auburn.edu), Auburn University, Auburn, Alabama, USA; Sadhana Puntambekar (puntambekar@education.wisc.edu), University of Wisconsin-Madison, Madison, Wisconsin, USA. Keywords: Large Language Models, Conversational AI, Meta-Conversation.
Simulations are widely used to teach science in grade schools. These simulations are often augmented with a conversational artificial intelligence (AI) agent that provides real-time scaffolding support for students conducting experiments. Such AI agents are highly tailored to each simulation, with a predesigned set of Instructional Goals (IGs). This makes it difficult for teachers to adjust the IGs, since the agent may no longer align with the revised goals; for the same reason, teachers are hesitant to adopt new third-party simulations. In this research, we introduce SimPal, a Large Language Model (LLM) based meta-conversational agent, to resolve this misalignment between a pre-trained conversational AI agent and the constantly evolving pedagogy of instructors. Through natural conversation with SimPal, teachers first explain their desired IGs; based on these, SimPal identifies a set of relevant physical variables and their relationships to create symbolic representations of the desired IGs. The symbolic representations can then be leveraged to design prompts for the original AI agent that better align with the desired IGs. We empirically evaluated SimPal using two LLMs, ChatGPT-3.5 and PaLM 2, on 63 physics simulations from PhET and Golabz. Additionally, we examined the impact of different prompting techniques on the LLMs' performance, using the TELeR taxonomy, in identifying relevant physical variables for the IGs. Our findings showed that SimPal can perform this task with a high degree of accuracy when provided with a well-defined prompt.
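The pipeline the abstract describes (teacher-stated IG → relevant physical variables and their relationships → symbolic representation → prompt for the simulation's agent) can be sketched as a small data structure. This is a minimal illustration, not SimPal's actual implementation; the class name `InstructionalGoal`, its fields, and the example goal are all hypothetical.

```python
from dataclasses import dataclass, field


@dataclass
class InstructionalGoal:
    """Hypothetical symbolic representation of a teacher's IG:
    the physical variables involved and the relationships among them."""
    description: str
    variables: list[str] = field(default_factory=list)
    relationships: list[str] = field(default_factory=list)

    def to_prompt(self) -> str:
        # Render the symbolic form as a scaffolding prompt that could be
        # handed to the simulation's conversational agent.
        return (
            f"Guide the student toward this goal: {self.description}. "
            f"Focus on the variables {', '.join(self.variables)} "
            f"and the relationships: {'; '.join(self.relationships)}."
        )


# Example IG for a mass-on-a-spring simulation (illustrative values only).
ig = InstructionalGoal(
    description="understand how mass and spring constant affect oscillation period",
    variables=["mass (m)", "spring constant (k)", "period (T)"],
    relationships=["T = 2*pi*sqrt(m/k)"],
)
print(ig.to_prompt())
```

Keeping the IG in symbolic form like this, rather than as free text, is what would let a revised goal be re-rendered into a fresh prompt without retraining the underlying agent.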
-
Examining the effect of automated assessments and feedback on students' written science explanations
Writing scientific explanations is a core practice in science. However, students find it difficult to write coherent scientific explanations. Additionally, teachers find it challenging to provide real-time feedback on students' essays. In this study, we discuss how PyrEval, an NLP technology, was used to automatically assess students' essays and provide feedback. We found that students explained more key ideas in their essays after the automated assessment and feedback. However, there were issues with the automated assessments as well as with students' understanding of the feedback and revising their essays.
-
Building causal knowledge is critical to science learning and scientific explanations that require one to understand the how and why of a phenomenon. In the present study, we focused on writing about the how and why of a phenomenon. We used natural language processing (NLP) to provide automated feedback on middle school students' writing about an underlying principle (the law of conservation of energy) and its related concepts. We report the role of understanding the underlying principle in writing based on NLP-generated feedback.
